Training feedforward networks with the Marquardt algorithm

Authors

  • Martin T. Hagan
  • Mohammad Bagher Menhaj
Abstract

The Marquardt algorithm for nonlinear least squares is presented and is incorporated into the backpropagation algorithm for training feedforward neural networks. The algorithm is tested on several function approximation problems, and is compared with a conjugate gradient algorithm and a variable learning rate algorithm. It is found that the Marquardt algorithm is much more efficient than either of the other techniques when the network contains no more than a few hundred weights.
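The weight update the abstract refers to (the Marquardt modification of Gauss-Newton, applied to the network's sum-squared error) can be sketched as follows. This is a hypothetical minimal illustration, not the paper's implementation: the network size, the μ adjustment factors, and the finite-difference Jacobian are all illustrative simplifications (the paper computes the Jacobian efficiently via backpropagation).

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1, 1, 20).reshape(-1, 1)   # training inputs
Y = np.sin(np.pi * X)                        # targets for a toy fitting problem

H = 5                                        # hidden units (illustrative choice)
n_params = H + H + H + 1                     # W1, b1, W2, b2 packed into one vector

def forward(w, X):
    # Unpack the flat parameter vector into a 1-H-1 tanh network.
    W1 = w[:H].reshape(1, H)
    b1 = w[H:2*H]
    W2 = w[2*H:3*H].reshape(H, 1)
    b2 = w[3*H]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def residuals(w):
    # Error vector e whose squared norm is the performance index.
    return (forward(w, X) - Y).ravel()

def jacobian(w, eps=1e-6):
    # Finite-difference Jacobian of the residuals w.r.t. the weights;
    # the paper instead obtains J by backpropagating sensitivities.
    r0 = residuals(w)
    J = np.empty((r0.size, w.size))
    for j in range(w.size):
        wp = w.copy()
        wp[j] += eps
        J[:, j] = (residuals(wp) - r0) / eps
    return J

w = rng.normal(scale=0.5, size=n_params)
mu = 1e-2
for _ in range(200):
    r = residuals(w)
    J = jacobian(w)
    # Marquardt step: solve (J^T J + mu I) dw = -J^T e.
    dw = np.linalg.solve(J.T @ J + mu * np.eye(w.size), -J.T @ r)
    if np.sum(residuals(w + dw) ** 2) < np.sum(r ** 2):
        w = w + dw
        mu = max(mu / 10, 1e-12)   # step accepted: move toward Gauss-Newton
    else:
        mu *= 10                   # step rejected: move toward gradient descent

print(np.sum(residuals(w) ** 2))   # final sum-squared error
```

As μ grows the step approaches small-step gradient descent, and as μ shrinks it approaches the Gauss-Newton step; this interpolation is what makes the method efficient for networks with up to a few hundred weights, since it requires solving a dense system in the number of parameters.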


Related articles

Efficient algorithm for training neural networks with one hidden layer

An efficient second-order algorithm for training feedforward neural networks is presented. The algorithm has a convergence rate similar to that of the Levenberg-Marquardt (LM) method, but is less computationally intensive and requires less memory. This is especially important for large neural networks, where the LM algorithm becomes impractical. The algorithm was verified on several examples.


Applying Bacterial Memetic Algorithm for Training Feedforward and Fuzzy Flip-Flop based Neural Networks

In our previous work we proposed some extensions of the Levenberg-Marquardt algorithm: the Bacterial Memetic Algorithm and the Bacterial Memetic Algorithm with Modified Operator Execution Order, for fuzzy rule base extraction from input-output data. Furthermore, we have investigated fuzzy flip-flop based feedforward neural networks. In this paper we introduce the adaptation of the Bacterial Memet...


An Algorithm for Fast Convergence in Training Neural Networks

In this work, two modifications of the Levenberg-Marquardt algorithm for feedforward neural networks are studied. One modification concerns the performance index, while the other concerns the calculation of gradient information. The modified algorithm gives a better convergence rate than the standard Levenberg-Marquardt (LM) method, is less computationally intensive, and requires less memory. The p...


An Efficient Optimization Method for Extreme Learning Machine Using Artificial Bee Colony

Traditional learning algorithms based on gradient descent, such as back-propagation (BP) and its variant Levenberg-Marquardt (LM), have been widely used for training multilayer feedforward neural networks. Gradient-descent-based algorithms usually converge more slowly than desired, since many iterative learning steps are needed by such algorithms, and...


Grasping Force Prediction for Underactuated Multi-Fingered Hand by Using Artificial Neural Network

In this paper, a feedforward neural network with the Levenberg-Marquardt backpropagation training algorithm is used to predict grasping forces from multisensory signals, used as training samples, for a specific design of underactuated multi-fingered hand. This avoids the complexity of calculating the inverse kinematics, which appears in the dynamic modeling of the robotic hand, and prepa...




Journal:
  • IEEE Transactions on Neural Networks

Volume 5, Issue 6

Pages: -

Published: 1994